Husky Robot

Project information

Description

The goal of this project is to develop a Robot Localization algorithm based on a Particle Filter (PF) for the Clearpath Robotics Husky robot. The robot has two motors driving a differential-drive base, and a laser scanner is mounted on top. To achieve this we use:

  • Laser data (lidar)
  • Robot's wheel odometry
  • Map
Figure: Clearpath Robotics Husky
Figure: Indoor map

Firstly, we implemented the prediction of the robot's pose using the wheel odometry. Then, we created a likelihood field of the map to be used by the PF. Using the likelihood field we performed a correction step with the laser scans before resampling the particles. Finally, we derived the estimated pose from the distribution of the particle set.

To implement the necessary parts of the PF algorithm, the following framework is used:

Figure: Particle Filter algorithm (pseudo-code)

where:

The for loop (lines 3-8) generates the new samples: line 6 computes the importance weight, line 7 updates the normalization factor, line 8 inserts the generated particle into the set, and line 10 normalizes the weights.
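The framework above can be sketched as a single PF iteration. This is a minimal illustration, not the project's code: `sample_motion` and `weight_fn` are hypothetical stand-ins for the motion and sensor models described later, and particles are stored as `(pose, weight)` tuples.

```python
def particle_filter(S_prev, u, z, sample_motion, weight_fn):
    """One PF iteration: predict, weight, normalize.

    S_prev: list of (pose, weight) tuples; u: odometry increment; z: laser scan.
    sample_motion and weight_fn are placeholders for the models described below.
    """
    S, eta = [], 0.0
    for (x_prev, _) in S_prev:                 # lines 3-8: generate new samples
        x = sample_motion(u, x_prev)           # prediction step
        w = weight_fn(z, x)                    # line 6: importance weight
        eta += w                               # line 7: normalization factor
        S.append((x, w))                       # line 8: insert particle
    S = [(x, w / eta) for (x, w) in S]         # line 10: normalize the weights
    return S                                   # resampling (see below) follows
```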

Likelihood field

First things first: we need to initialize the particle set randomly over the free space of the environment (map); the framework above uses this distribution of particles to infer the robot's location. At each iteration of the cycle, one particle is generated with random x and y coordinates that do not fall on a wall, and a random orientation.
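The initialization step can be sketched as rejection sampling over a grid map. This is an illustrative version under assumed conventions (the map as a 2D boolean occupancy grid, poses as `(x, y, theta)` tuples), not the project's actual data structures:

```python
import math
import random

def init_particles(occupancy, n):
    """Draw n particles uniformly over the free cells of a grid map.

    occupancy: 2D list of booleans, True where a wall/obstacle is (assumption).
    Each particle is (x, y, theta) with theta in [-pi, pi).
    """
    h, w = len(occupancy), len(occupancy[0])
    particles = []
    while len(particles) < n:
        x, y = random.uniform(0, w), random.uniform(0, h)
        cx, cy = min(int(x), w - 1), min(int(y), h - 1)
        if occupancy[cy][cx]:                  # rejected: falls on a wall
            continue
        theta = random.uniform(-math.pi, math.pi)
        particles.append((x, y, theta))
    return particles
```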

To precompute the likelihood field of the environment I considered the Beam-based Proximity Model: it calculates the probabilities based on the distance between the expected measurement (z_exp) and the actual measurement obtained from the laser sensor (z). First, for each cell of the map, I compute the distance to the closest obstacle using a Breadth-First Search. This distance can be seen as (z_exp - z) if the actual scan beam (z) falls into that cell. Having that value, it is possible to compute:

$$ p_{hit} = e^{-\frac{1}{2} \frac{(z - z_{exp})^2}{\sigma_{hit}^2}} $$
$$ p_{rand} = \frac{1}{z_{max}} $$

Where: p_hit is essentially a measure of how well the predicted measurement matches the actual measurement obtained from the sensor, σ_hit represents the standard deviation of the hit probability distribution, p_rand models (uniform) random measurements, and z_max is the maximum range of the laser. By combining these values in a weighted sum, we obtain the total weighted probability for each cell of the map.
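The precomputation described above can be sketched as a multi-source BFS distance transform followed by the weighted sum of p_hit and p_rand. All numeric parameters here (sigma_hit, z_max, the mixing weights, cell size) are illustrative placeholders, not the values used in the project:

```python
import math
from collections import deque

def likelihood_field(occupancy, sigma_hit=0.2, z_max=30.0,
                     w_hit=0.9, w_rand=0.1, cell_size=1.0):
    """Per-cell likelihoods: BFS distance to the closest obstacle,
    then the weighted sum of p_hit (Gaussian on that distance) and p_rand."""
    h, w = len(occupancy), len(occupancy[0])
    dist = [[math.inf] * w for _ in range(h)]
    q = deque()
    for r in range(h):
        for c in range(w):
            if occupancy[r][c]:                # obstacle cells start at distance 0
                dist[r][c] = 0.0
                q.append((r, c))
    while q:                                   # multi-source BFS (4-connected)
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and dist[nr][nc] == math.inf:
                dist[nr][nc] = dist[r][c] + cell_size
                q.append((nr, nc))
    # p_hit from the Gaussian on the distance, p_rand uniform over the range
    return [[w_hit * math.exp(-0.5 * (dist[r][c] / sigma_hit) ** 2)
             + w_rand / z_max
             for c in range(w)] for r in range(h)]
```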

Figure: Likelihood field idea

Motion model

This task required updating all the particles whenever new odometry is received by the callback function (i.e., the robot moves).

The particles are updated considering the robot's previous odometry; all particles move in unison with the robot's motion, so they are more likely to converge towards the robot's true pose. We apply the Motion Model only if the robot is moving (i.e., if the absolute value of the linear or angular velocity obtained from the odometry is > 0.001). Then, I retrieve the robot's previous odometry (x, y, θ), the robot's new odometry (x', y', θ') and the particle's previous pose (x_prev_part, y_prev_part, θ_prev_part). With these quantities I can compute δ_rot1, δ_trans, δ_rot2, which represent the incremental motion performed by the robot between consecutive time steps.

$$ \delta_{\text{rot}1} = \text{atan2}(y' - y, x' - x) - \theta $$ $$ \delta_{\text{trans}} = \sqrt{(x' - x)^2 + (y' - y)^2} $$ $$ \delta_{\text{rot}2} = \theta' - \theta - \delta_{\text{rot}1} $$

Figure: Incremental motion

At this point we compute the predicted incremental motion with added noise, incorporating uncertainties in the robot's motion, namely: $$ \hat{\delta}_{\text{rot}1} = \delta_{\text{rot}1} + \epsilon_{a1} |\delta_{\text{rot}1}| + \epsilon_{a2} |\delta_{\text{trans}}| $$ $$ \hat{\delta}_{\text{trans}} = \delta_{\text{trans}} + \epsilon_{a3} |\delta_{\text{trans}}| + \epsilon_{a4} |\delta_{\text{rot}1} + \delta_{\text{rot}2}| $$ $$ \hat{\delta}_{\text{rot}2} = \delta_{\text{rot}2} + \epsilon_{a1} |\delta_{\text{rot}2}| + \epsilon_{a2} |\delta_{\text{trans}}| $$

Note that the ε terms introduce variability by sampling from a normal distribution according to the α1, α2, α3, α4 parameters, which represent gains to be tuned for the specific task (one possible improvement is to refine these values).

After computing the predicted motion, we are ready to compute the updated pose (x_upd, y_upd, θ_upd) for each particle according to these formulas:

$$x_{upd} = x_{prev_{part}} + \hat{\delta}_{trans} \cos(\theta_{prev_{part}} + \hat{\delta}_{rot1})$$ $$y_{upd} = y_{prev_{part}} + \hat{\delta}_{trans} \sin(\theta_{prev_{part}} + \hat{\delta}_{rot1})$$ $$\theta_{upd} = \theta_{prev_{part}} + \hat{\delta}_{rot1} + \hat{\delta}_{rot2}$$
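The motion model steps above can be sketched as one function per particle. This is a hedged reading of the formulas: the ε terms are interpreted as zero-mean Gaussian samples whose spread scales with the motion magnitudes via the α gains, and the default `alphas` values are placeholders:

```python
import math
import random

def sample_motion_model(odom_prev, odom_new, particle,
                        alphas=(0.1, 0.1, 0.1, 0.1)):
    """Odometry motion model as in the formulas above; alphas are placeholder gains."""
    x, y, th = odom_prev
    x2, y2, th2 = odom_new
    # incremental motion between consecutive odometry readings
    d_rot1 = math.atan2(y2 - y, x2 - x) - th
    d_trans = math.hypot(x2 - x, y2 - y)
    d_rot2 = th2 - th - d_rot1
    a1, a2, a3, a4 = alphas
    # add Gaussian noise scaled by the magnitude of the motion
    h_rot1 = d_rot1 + random.gauss(0, a1 * abs(d_rot1) + a2 * abs(d_trans))
    h_trans = d_trans + random.gauss(0, a3 * abs(d_trans)
                                     + a4 * abs(d_rot1 + d_rot2))
    h_rot2 = d_rot2 + random.gauss(0, a1 * abs(d_rot2) + a2 * abs(d_trans))
    # apply the noisy increments to the particle's previous pose
    px, py, pth = particle
    return (px + h_trans * math.cos(pth + h_rot1),
            py + h_trans * math.sin(pth + h_rot1),
            pth + h_rot1 + h_rot2)
```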

Resampling

At this point we need to define each particle's weight combining the laser scan data and the pre-stored likelihood values.

I will skip some details that are available in the report published on GitHub.

For each particle and for each laser measurement (one laser scan = ~720 measurements), we compute some probabilities that we combine into a weighted sum, tot_prob.

Once we have defined the total probability for each particle, we normalize all the weights so that the weights of all particles sum to 1, representing a valid probability distribution.
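The weighting and normalization steps can be sketched as follows. The `field_lookup` callable is a hypothetical stand-in for projecting a beam endpoint from a particle's pose and reading the precomputed likelihood-field value there; the per-beam combination is simplified to a plain accumulation:

```python
def weight_particles(particles, scan, field_lookup):
    """Weight each particle by accumulating the precomputed likelihood at
    every beam endpoint, then normalize so the weights sum to 1.

    field_lookup(particle, beam) is an assumed helper returning the stored
    likelihood-field value for that beam seen from that particle's pose.
    """
    weights = []
    for p in particles:
        tot_prob = 0.0
        for beam in scan:                      # ~720 measurements per scan
            tot_prob += field_lookup(p, beam)
        weights.append(tot_prob)
    total = sum(weights)
    return [w / total for w in weights]        # valid probability distribution
```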

Finally, we can apply the Resampling: the particles with higher weights are kept, as they are more likely to represent the actual location of the robot, while the ones with the lowest weights are discarded.

To implement Systematic Resampling we start by computing the cumulative sum of the weights to obtain the CDF. We create a starting point by generating a random number between 0 and 1/n, where n is the number of particles in the set.

To draw the samples, we iterate n − random_sample_size times and use a threshold to determine which particles to select: if u, the pre-initialized threshold (advanced by 1/n at each step, according to Systematic Resampling), exceeds the cumulative weight of a particle, we skip to the next particle. Each selected particle is added to the S' array of resampled particles. During these iterations I also compute the mean pose of all the particles, around which random_sample_size new random particles are spawned.
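The resampling steps above can be sketched like this. It assumes normalized weights and `(x, y, theta)` particles; the recovery particles are spawned with placeholder Gaussian spreads around the mean pose (the actual spread used in the project is not stated here), and the heading mean is a naive average for brevity:

```python
import random

def systematic_resample(particles, weights, random_sample_size=0):
    """Systematic resampling: one random start in [0, 1/n), then evenly
    spaced thresholds swept over the cumulative weights (the CDF)."""
    n = len(particles)
    cdf, acc = [], 0.0
    for w in weights:                          # cumulative sum of the weights
        acc += w
        cdf.append(acc)
    u = random.uniform(0, 1.0 / n)             # starting threshold
    resampled, i = [], 0
    for _ in range(n - random_sample_size):
        while i < n - 1 and u > cdf[i]:        # threshold exceeds cumulative
            i += 1                             # weight: skip to next particle
        resampled.append(particles[i])
        u += 1.0 / n                           # advance the threshold by 1/n
    # mean pose of the kept particles (naive angle average, for illustration)
    mx = sum(p[0] for p in resampled) / len(resampled)
    my = sum(p[1] for p in resampled) / len(resampled)
    mth = sum(p[2] for p in resampled) / len(resampled)
    for _ in range(random_sample_size):        # recovery particles near the mean
        resampled.append((mx + random.gauss(0, 0.5),
                          my + random.gauss(0, 0.5),
                          mth + random.gauss(0, 0.1)))
    return resampled
```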

At this point, having defined and resampled our particle set, we can infer the robot's position by calculating the mean pose of all particles and updating the robot's pose estimate with that average pose.
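The pose estimate can be sketched as the mean over the particle set. One detail worth noting: averaging headings naively fails near ±π, so this sketch uses a circular mean for θ (an implementation choice on my part, not necessarily what the project does):

```python
import math

def estimate_pose(particles):
    """Mean pose of the particle set; the heading uses a circular mean so
    angles near the +/-pi wrap-around average correctly."""
    n = len(particles)
    x = sum(p[0] for p in particles) / n
    y = sum(p[1] for p in particles) / n
    # circular mean: average the unit vectors, then take the angle
    th = math.atan2(sum(math.sin(p[2]) for p in particles),
                    sum(math.cos(p[2]) for p in particles))
    return (x, y, th)
```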

Figure: Estimated pose of the Robot

In the figure we can see the true pose of the robot (the simulated robot model) and the particles as green arrows; the estimated pose is the mean pose of all the particles. As we can see, the Particle Filter resamples the majority of the particles around the robot's true pose.